Overview of Popular Benchmark Sets

Author

  • Justin E. Harlow
Abstract

One of the most difficult tasks CAD users face is the evaluation and comparison of different tools and algorithms. For commercial software purchasers, it is vital to understand how well a given tool does the required job and which of many possible choices is best for the kinds of problems a user will face. For the tool developer, whether in academia or industry, the efficiency of critical algorithms must be measured and compared to understand both tool behavior and progress over time. Over the years, there have been many attempts to create and use neutral benchmarks for tool evaluation and comparison. Typically, a benchmark set consists of a collection of circuits in a common format, which attempt to represent a range of problems for evaluating algorithms and tools within an important problem domain. In principle, if everyone uses the same test cases to evaluate similar tools, it should be straightforward to compare results, although this is rarely true in reality. When using benchmark sets, or interpreting results, keep the following things in mind:

  • Few real tools are as simple as the examples studied in algorithms courses. For instance, a sort algorithm does one specific job; its complexity can be calculated rigorously, and its real-world performance can easily be measured and compared with that of other algorithms designed for the same task. But what about a wire-routing algorithm? Here, too, there are core algorithms whose theoretical complexity can be calculated, but in addition there is an interplay of algorithms and constraints that blurs the picture. The theoretical complexity or benchmark performance of an algorithm often says little about the performance of an entire tool or flow.
  • It is a natural human tendency to choose test cases that demonstrate the good qualities of our tools, and to avoid those that do not. Beware of published results that cite a benchmark set but do not report results for all members of the set.
  • Do not believe reported results until you have verified them by repeating the test. In many cases, there will be unspoken assumptions and boundary conditions that make it impossible for another party to achieve published results.
  • How do you know that the test case truly represents your problem? In most cases, real designs are not publicly used to evaluate tools because of proprietary concerns. Benchmark sets are surrogate circuits chosen to represent the kinds of problems a tool …
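The cherry-picking pitfall above can be made concrete. A minimal sketch, assuming results are summarized by a geometric-mean runtime over the whole suite (the helper name, the circuit names, and the timings below are illustrative, not from the article): a summary routine can simply refuse to aggregate a report that omits any member of the benchmark set.

```python
import math

def geomean_runtime(suite, results):
    """Geometric-mean runtime (seconds) over a full benchmark suite.

    Rejects partial reports: a summary computed over a subset of the
    suite is not comparable with one computed over all of it.
    """
    missing = [name for name in suite if name not in results]
    if missing:
        raise ValueError(f"results omit benchmarks: {missing}")
    times = [results[name] for name in suite]
    # Geometric mean via logs, to avoid overflow on large products.
    return math.exp(sum(math.log(t) for t in times) / len(times))

# Hypothetical suite and timings in the style of ISCAS'89 circuit names.
suite = ["s27", "s344", "s5378"]
full_report = {"s27": 0.2, "s344": 1.5, "s5378": 12.0}
print(round(geomean_runtime(suite, full_report), 3))  # cube root of 3.6
```

A partial report such as `{"s27": 0.2, "s344": 1.5}` raises `ValueError` instead of silently producing a flattering number.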




Journal:
  • IEEE Design & Test of Computers

Volume 17

Published 2000